Towards a Gesture-Sound Cross-Modal Analysis

Authors

  • Baptiste Caramiaux
  • Frédéric Bevilacqua
  • Norbert Schnell
Abstract

This article reports on the exploration of a method based on canonical correlation analysis (CCA) for the analysis of the relationship between gesture and sound in the context of music performance and listening. This method is a first step in the design of an analysis tool for gesture-sound relationships. In this exploration we used motion capture data recorded from subjects performing free hand movements while listening to short sound examples. We assume that even though the relationship between gesture and sound might be more complex, at least part of it can be revealed and quantified by linear multivariate regression applied to the motion capture data and audio descriptors extracted from the sound examples. After outlining the theoretical background, the article shows how the method allows for pertinent reasoning about the relationship between gesture and sound by analysing the data sets recorded from multiple and individual subjects.
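To make the core idea concrete, here is a minimal sketch of classical CCA applied to synthetic stand-ins for the paper's data. This is not the authors' pipeline: the data, dimensionalities, and the regularization constant are invented for illustration. The key point is that CCA finds paired linear projections of the two multivariate signals (here, "gesture" channels and "audio" descriptors) whose correlation is maximal, and the resulting canonical correlations quantify how much linear cross-modal structure is shared.

```python
import numpy as np

def cca(X, Y):
    """Classical canonical correlation analysis via whitening + SVD.
    Returns the canonical correlations (descending) and the two
    projection matrices mapping each modality to its canonical space."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    # Regularized covariance blocks (the small ridge keeps Cholesky stable).
    cxx = X.T @ X / n + 1e-8 * np.eye(X.shape[1])
    cyy = Y.T @ Y / n + 1e-8 * np.eye(Y.shape[1])
    cxy = X.T @ Y / n
    lx = np.linalg.inv(np.linalg.cholesky(cxx))   # whitener for X
    ly = np.linalg.inv(np.linalg.cholesky(cyy))   # whitener for Y
    # Singular values of the whitened cross-covariance are the
    # canonical correlations.
    u, s, vt = np.linalg.svd(lx @ cxy @ ly.T)
    return s, lx.T @ u, ly.T @ vt.T

# Synthetic stand-ins for the recorded data: rows are time frames,
# "gesture" plays the role of motion-capture channels and "audio" the
# role of audio descriptors; both share a 2-dimensional latent signal.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
gesture = latent @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(500, 6))
audio = latent @ rng.normal(size=(2, 4)) + 0.3 * rng.normal(size=(500, 4))

corrs, _, _ = cca(gesture, audio)
print(corrs)  # the first two correlations are high, the rest near zero
```

Because the two synthetic signals share a two-dimensional latent source, only the first two canonical correlations are large; in the paper's setting, the analogous pattern indicates which (and how many) linear components of gesture and sound covary.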

Similar resources

Learning English Auxiliary Modal Verbs by Iranian Children

Modal verbs in English are challenging to learn for speakers of other languages. The purpose of this study was to shed light on the use of gesture in learning English modal verbs by Persian-speaking children. To achieve this, 60 elementary Iranian learners, studying at some institutes in Karaj, took part in this study. The participants were randomly put into one experimental group and one control group. T...

Cross-modal Sound Mapping Using Deep Learning

We present a method for automatic feature extraction and cross-modal mapping using deep learning. Our system uses stacked autoencoders to learn a layered feature representation of the data. Feature vectors from two (or more) different domains are mapped to each other, effectively creating a cross-modal mapping. Our system can either run fully unsupervised, or it can use high-level labeling to f...

Hyper-shaku (Border-crossing): Towards the Multi-modal Gesture-controlled Hyper-Instrument

Hyper-shaku (Border-Crossing) is an interactive sensor environment that uses motion sensors to trigger immediate responses and generative processes augmenting the Japanese bamboo shakuhachi in both the auditory and visual domain. The latter differentiates this process from many hyper-instruments by building a performance of visual design as well as electronic music on top of the acoustic perfor...

Interactive Sound Texture Synthesis Through Semi-Automatic User Annotations

We present a way to make environmental recordings controllable again by the use of continuous annotations of the high-level semantic parameter one wishes to control, e.g. wind strength or crowd excitation level. A partial annotation can be propagated to cover the entire recording via cross-modal analysis between gesture and sound by canonical time warping (CTW). The annotations serve as a descr...

Cross-Modal Grounding of Meaning

The current study investigates how the speaker grounds meaning for a referent by gestural repetition along with speech in daily conversation. The domain of analysis is the stretch of talk that encompasses the beginning and the end of the joint action during which a pair of similar gestures is produced by different speakers across turns to depict the same referent. A particular cross-modal groun...

Journal title:

Volume   Issue

Pages  -

Publication date: 2009